- 
            Reconstructing the incidence of SARS-CoV-2 infection is central to understanding the state of the pandemic. Seroprevalence studies are often used to assess cumulative infections because they can identify asymptomatic infections. Since July 2020, commercial laboratories have conducted nationwide serosurveys for the U.S. CDC. They employed three assays with different sensitivities and specificities, potentially introducing biases into seroprevalence estimates. Using models, we show that accounting for assay characteristics explains some of the observed state-to-state variation in seroprevalence, and, when integrating case and death surveillance data, that estimates of the proportion infected can differ substantially from seroprevalence estimates when the Abbott assay is used. We also found that states with higher proportions infected (before or after vaccination) had lower vaccination coverage, a pattern corroborated using a separate dataset. Finally, to understand vaccination rates relative to the increase in cases, we estimated the proportion of the population that received a vaccine prior to infection.
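The assay adjustment described above is commonly done with the Rogan-Gladen estimator, which corrects an apparent (test-positive) prevalence for an assay's sensitivity and specificity. A minimal sketch with illustrative numbers; these are not the assay values or the model used in the study:

```python
def rogan_gladen(apparent_prev, sensitivity, specificity):
    """Adjust an apparent (test-positive) prevalence for assay error.

    Classic Rogan-Gladen estimator; the result is clipped to [0, 1]
    because sampling noise can push the raw estimate outside that range.
    """
    adjusted = (apparent_prev + specificity - 1) / (sensitivity + specificity - 1)
    return max(0.0, min(1.0, adjusted))

# Illustrative only: an assay with 90% sensitivity and 99% specificity
# observing 10% test positivity implies a slightly higher true prevalence.
print(rogan_gladen(0.10, 0.90, 0.99))
```

Two assays with different sensitivity/specificity pairs applied to the same population will yield different apparent seroprevalences, which is the bias the study's models account for.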
- 
            Probabilistic predictions support public health planning and decision making, especially in infectious disease emergencies. Aggregating outputs from multiple models yields more robust predictions of outcomes and associated uncertainty. While the selection of an aggregation method can be guided by retrospective performance evaluations, this is not always possible. For example, if predictions are conditional on assumptions about how the future will unfold (e.g. possible interventions), these assumptions may never materialize, precluding any direct comparison between predictions and observations. Here, we summarize literature on aggregating probabilistic predictions, illustrate various methods for infectious disease predictions via simulation, and present a strategy for choosing an aggregation method when empirical validation cannot be used. We focus on the linear opinion pool (LOP) and Vincent average, common methods that make different assumptions about between-prediction uncertainty. We contend that the assumptions of the aggregation method should align with a hypothesis about how uncertainty is expressed within and between predictions from different sources. The LOP assumes that between-prediction uncertainty is meaningful and should be retained, while the Vincent average assumes that between-prediction uncertainty is akin to sampling error and should not be preserved. We provide an R package for implementation. Given the rising importance of multi-model infectious disease hubs, our work provides useful guidance on aggregation and a deeper understanding of the benefits and risks of different approaches.
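The contrast between the two methods can be sketched numerically: the LOP averages the component cumulative distribution functions, while the Vincent average averages their quantile functions. A minimal Python sketch with two hypothetical normal predictive distributions (illustrative parameters; the paper itself provides an R package):

```python
from statistics import NormalDist

# Two hypothetical component forecasts of the same quantity.
components = [NormalDist(mu=100, sigma=10), NormalDist(mu=140, sigma=10)]

def vincent_quantile(dists, p):
    """Vincent average: average the component quantiles at level p."""
    return sum(d.inv_cdf(p) for d in dists) / len(dists)

def lop_quantile(dists, p, lo=-1e6, hi=1e6, tol=1e-9):
    """Linear opinion pool: the pooled CDF is the mean of the component
    CDFs; invert it by bisection to read off the quantile at level p."""
    def pooled_cdf(x):
        return sum(d.cdf(x) for d in dists) / len(dists)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if pooled_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# By symmetry both methods agree on the median (120), but the LOP retains
# the between-forecast spread, so its 95% interval is wider than Vincent's.
for p in (0.025, 0.5, 0.975):
    print(p, round(vincent_quantile(components, p), 1),
          round(lop_quantile(components, p), 1))
```

With these two components, the Vincent average behaves as if the disagreement between forecasts were sampling error, while the LOP treats it as genuine uncertainty to be preserved, which is exactly the hypothesis choice the abstract describes.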
- 
            MacPherson, Peter (Ed.) Background: Coronavirus Disease 2019 (COVID-19) continues to cause significant hospitalizations and deaths in the United States. Its continued burden and the impact of annually reformulated vaccines remain unclear. Here, we present projections of COVID-19 hospitalizations and deaths in the United States for the next 2 years under 2 plausible assumptions about immune escape (20% per year and 50% per year) and 3 possible CDC recommendations for the use of annually reformulated vaccines (no recommendation, vaccination for those aged 65 years and over, vaccination for all eligible age groups based on FDA approval). Methods and findings: The COVID-19 Scenario Modeling Hub solicited projections of COVID-19 hospitalizations and deaths between April 15, 2023 and April 15, 2025 under 6 scenarios representing the intersection of the considered levels of immune escape and vaccination. Annually reformulated vaccines are assumed to be 65% effective against symptomatic infection with strains circulating on June 15 of each year and to become available on September 1. Age- and state-specific coverage in recommended groups was assumed to match that seen for the first (fall 2021) COVID-19 booster. State and national projections from 8 modeling teams were ensembled to produce projections for each scenario and expected reductions in disease outcomes due to vaccination over the projection period. From April 15, 2023 to April 15, 2025, COVID-19 is projected to cause annual epidemics peaking November to January. In the most pessimistic scenario (high immune escape, no vaccination recommendation), we project 2.1 million (90% projection interval (PI) [1,438,000, 4,270,000]) hospitalizations and 209,000 (90% PI [139,000, 461,000]) deaths, exceeding pre-pandemic mortality of influenza and pneumonia. In high immune escape scenarios, vaccination of those aged 65+ results in 230,000 (95% confidence interval (CI) [104,000, 355,000]) fewer hospitalizations and 33,000 (95% CI [12,000, 54,000]) fewer deaths, while vaccination of all eligible individuals results in 431,000 (95% CI [264,000, 598,000]) fewer hospitalizations and 49,000 (95% CI [29,000, 69,000]) fewer deaths. Conclusions: COVID-19 is projected to be a significant public health threat over the coming 2 years. Broad vaccination has the potential to substantially reduce the burden of this disease, saving tens of thousands of lives each year.
- 
            Accurate forecasts can enable more effective public health responses during seasonal influenza epidemics. For the 2021–22 and 2022–23 influenza seasons, 26 forecasting teams provided national and jurisdiction-specific probabilistic predictions of weekly confirmed influenza hospital admissions for one to four weeks ahead. Forecast skill is evaluated using the Weighted Interval Score (WIS), relative WIS, and coverage. Six out of 23 models outperform the baseline model across forecast weeks and locations in 2021–22, and 12 out of 18 models do so in 2022–23. Averaging across all forecast targets, the FluSight ensemble is the second most accurate model as measured by WIS in 2021–22 and the fifth most accurate in the 2022–23 season. Forecast skill and 95% coverage for the FluSight ensemble and most component models degrade over longer forecast horizons. In this work we demonstrate that while the FluSight ensemble was a robust predictor, even ensembles face challenges during periods of rapid change.
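The Weighted Interval Score used to evaluate these forecasts can be computed directly from a forecast's predictive quantiles via the standard identity that WIS equals the mean, over quantile levels, of twice the pinball (quantile) loss. A minimal sketch with made-up numbers, not data from the FluSight evaluation:

```python
def wis(quantile_levels, quantile_values, observed):
    """Weighted Interval Score from predictive quantiles.

    Uses the standard identity: WIS equals the mean over quantile levels
    of twice the pinball (quantile) loss. Lower scores are better; a
    perfect point forecast at every quantile scores zero.
    """
    losses = [
        2 * ((observed - q) * tau if observed >= q else (q - observed) * (1 - tau))
        for tau, q in zip(quantile_levels, quantile_values)
    ]
    return sum(losses) / len(losses)

# Hypothetical forecast of weekly flu admissions: three quantiles, one outcome.
levels = [0.25, 0.50, 0.75]
forecast = [80, 100, 130]
print(wis(levels, forecast, 110))
```

Because the score penalizes both wide intervals and missed observations, averaging it across weeks and locations (and normalizing against a baseline to obtain relative WIS) gives the comparable skill measure the abstract reports.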
- 
            Larremore, Daniel B (Ed.) During the COVID-19 pandemic, forecasting COVID-19 trends to support planning and response was a priority for scientists and decision makers alike. In the United States, COVID-19 forecasting was coordinated by a large group of universities, companies, and government entities led by the Centers for Disease Control and Prevention and the US COVID-19 Forecast Hub (https://covid19forecasthub.org). We evaluated approximately 9.7 million forecasts of weekly state-level COVID-19 cases for predictions 1–4 weeks into the future submitted by 24 teams from August 2020 to December 2021. We assessed coverage of central prediction intervals and weighted interval scores (WIS), adjusting for missing forecasts relative to a baseline forecast, and used a Gaussian generalized estimating equation (GEE) model to evaluate differences in skill across epidemic phases that were defined by the effective reproduction number. Overall, we found high variation in skill across individual models, with ensemble-based forecasts outperforming other approaches. Forecast skill relative to the baseline was generally higher for larger jurisdictions (e.g., states compared to counties). Over time, forecasts generally performed worst in periods of rapid change in reported cases (whether increasing or decreasing epidemic phases), with 95% prediction interval coverage dropping below 50% during the growth phases of the winter 2020, Delta, and Omicron waves. Ideally, case forecasts could serve as a leading indicator of changes in transmission dynamics. However, while most COVID-19 case forecasts outperformed a naïve baseline model, even the most accurate case forecasts were unreliable in key phases. Further research could improve forecasts of leading indicators, like COVID-19 cases, by leveraging additional real-time data, addressing performance across phases, improving the characterization of forecast confidence, and ensuring that forecasts are coherent across spatial scales. In the meantime, it is critical for forecast users to appreciate current limitations and use a broad set of indicators to inform pandemic-related decision making.
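The empirical prediction-interval coverage reported above (e.g., 95% coverage dropping below 50% during growth phases) is simply the fraction of observations that fall inside their forecast intervals. A minimal sketch with made-up data, not forecasts from the evaluation:

```python
def interval_coverage(lowers, uppers, observations):
    """Fraction of observed values falling inside their [lower, upper]
    prediction intervals; a well-calibrated 95% interval should score
    near 0.95 over many forecasts."""
    pairs = list(zip(lowers, uppers, observations))
    hits = sum(1 for lo, hi, y in pairs if lo <= y <= hi)
    return hits / len(pairs)

# Hypothetical 95% intervals for four weeks of case counts and the outcomes;
# in the last two weeks cases grow faster than the forecasts anticipated.
low = [900, 1100, 1500, 2100]
high = [1400, 1700, 2300, 3200]
obs = [1200, 1800, 2250, 4000]
print(interval_coverage(low, high, obs))
```

Coverage far below the nominal level during rapid-growth periods is exactly the unreliability pattern the evaluation describes: the intervals were too narrow to contain the surging observations.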